COMMUNICATION INTERFACE PROXIMITY AND USER ANXIETY: COMPARING DESKTOP, LAPTOP, AND HAND-HELD DEVICES AS MEDIA PLATFORMS FOR EMERGENCY ALERTS
This study is an experiment investigating the effects of communication interface proximity on college students' anxiety when they receive alerts about on-campus crimes via e-mail and text messages. It proposes a new dimension for the traditional concept of proximity in journalism and suggests shifting the emphasis of proximity from audience-to-event to user-to-interface. It draws its theoretical framework from multiple disciplines: human-computer interaction research, the information processing model, media effects research, and the psychological research on anxiety.
A total of 97 college students at a large mid-Atlantic university participated in this experiment. Communication interface proximity was operationalized as three different media platforms: desktop computer (stationary), laptop computer (portable), and hand-held device (mobile). The students were assigned to one of the three device groups based on their self-reported computer usage and received four crime alerts per day for two days through one of the devices. They were required to carry a Self-Assessment Manikin (SAM) pictorial scale during the experiment and to reply to the alerts as soon as possible using the SAM and felt-anxiety scales. They also filled out an online questionnaire at the beginning of the study, at the end of the first day, and at the end of the study.
Subjects who received the crime alerts on hand-held devices reported higher anxiety upon alert receipt than those receiving the alerts on desktop or laptop computers. Anxiety, valence, and arousal reported upon alert receipt in the laptop and desktop groups decreased significantly early on day two, suggesting an "overnight effect" of the crime alerts for these two groups. The hand-held group, however, still reported a high level of anxiety upon alert receipt early on day two, suggesting that the ubiquitous hand-held device stays, in effect, under the user's skin, with no "down time".
This study also found that anxiety predicted the latency of responses to the alerts and memory for the crime alerts, indicating that anxiety serves as an adaptive heuristic in an emergency and helps people allocate their limited cognitive resources, as suggested by the information processing model.
The impact of social capital on bank risk-taking
The concept of “social capital” has received considerable attention in recent years, yet few studies have explored the connection between social capital and bank risk-taking. In this study, I discuss the theory of social capital and its relevance to financial market behavior, and then analyze the relationship between social capital and bank risk-taking across countries. To measure social capital, I follow Knack and Keefer (1997) and use data on trust and civic norms collected from the World Values Survey. My measure of bank risk-taking is the natural logarithm of each bank's Z-score. Empirical results show that bank risk-taking is lower in countries where social capital is higher. The impact of social capital is also stronger when the country's level of education is lower. This paper investigates the negative impact of social capital on non-performing loans as well.
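The Z-score risk measure can be illustrated with a short sketch. The formula below is the standard construction from the banking literature (mean ROA plus the equity-to-assets ratio, divided by the standard deviation of ROA); the paper's exact construction may differ in detail, and the figures are made up for illustration:

```python
import math

def bank_z_score(roa_series, equity_to_assets):
    """Standard bank Z-score: (mean ROA + capital ratio) / sd(ROA).
    A higher Z means a greater distance to insolvency, i.e. lower risk-taking."""
    n = len(roa_series)
    mean_roa = sum(roa_series) / n
    sd_roa = math.sqrt(sum((r - mean_roa) ** 2 for r in roa_series) / (n - 1))
    return (mean_roa + equity_to_assets) / sd_roa

# Hypothetical bank: five years of ROA and an 8% equity-to-assets ratio.
roa = [0.012, 0.015, 0.010, 0.014, 0.011]
z = bank_z_score(roa, 0.08)
ln_z = math.log(z)  # ln(Z) is used because Z is highly skewed across banks
```

Taking the natural logarithm, as the study does, compresses the heavy right tail of the Z-score distribution before it enters the regressions.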
Predicting Facebook addiction and state anxiety without Facebook by gender, trait anxiety, Facebook intensity, and different Facebook activities
Background and aims: Although social networking sites have brought enormous convenience, their negative effects on users’ psychological well-being need more investigation. This study used a survey to examine Facebook addiction and state anxiety without Facebook. As research shows that gender is related to trait anxiety and may interact with trait anxiety to influence state anxiety, we also assessed the interaction effect between gender and trait anxiety. Methods: A total of 526 college students in the US participated in the survey. A systematic sampling method was used: an e-mail invitation with a link to the survey was sent to every third student on the student e-mail list. Study measures included demographics, trait anxiety, Facebook intensity, different Facebook activities, Facebook addiction, and state anxiety without Facebook. Hierarchical multiple regression was run to test how trait anxiety, gender, Facebook intensity, and different types of Facebook activities predict Facebook addiction and state anxiety. Results: Facebook use intensity predicts Facebook addiction (β = 0.573, p < .001) and state anxiety (β = 0.567, p < .001). Facebook use for broadcasting positively predicts Facebook addiction (β = 0.200, p < .01) and state anxiety (β = 0.171, p < .01). Trait anxiety positively predicts Facebook addiction (β = 0.121, p < .05) and state anxiety (β = 0.119, p < .05). Gender interacts with trait anxiety to jointly predict Facebook addiction (β = 0.201, p < .01). Discussion and conclusions: Trait anxiety, Facebook intensity, and broadcasting behavior on Facebook positively predict Facebook addiction and state anxiety. Moreover, gender interacts with trait anxiety such that the gender difference in Facebook addiction is significant only when trait anxiety is low.
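The hierarchical regression with a gender × trait-anxiety interaction can be sketched as follows. The data here are simulated and the block-entry order is an assumption (main effects first, the product term second), since the abstract reports only the coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 526  # sample size matching the study
gender = rng.integers(0, 2, n).astype(float)      # 0/1 coded
trait_anx = rng.normal(0, 1, n)
fb_intensity = rng.normal(0, 1, n)
# Simulated outcome: intensity and trait anxiety matter, plus an interaction.
addiction = (0.57 * fb_intensity + 0.12 * trait_anx
             + 0.20 * gender * trait_anx + rng.normal(0, 1, n))

def r_squared(predictors, y):
    """OLS R^2 with an intercept, via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Step 1: main effects only; Step 2: add the gender x trait-anxiety product.
r2_step1 = r_squared([gender, trait_anx, fb_intensity], addiction)
r2_step2 = r_squared([gender, trait_anx, fb_intensity, gender * trait_anx], addiction)
delta_r2 = r2_step2 - r2_step1  # incremental variance explained by the interaction
```

The ΔR² of the second block is what justifies interpreting the interaction term, as the study does when probing the gender difference at low trait anxiety.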
A Manifold Two-Sample Test Study: Integral Probability Metric with Neural Networks
Two-sample tests aim to determine whether two collections of observations follow the same distribution. We propose two-sample tests based on the integral probability metric (IPM) for high-dimensional samples supported on a low-dimensional manifold. We characterize the properties of the proposed tests with respect to the number of samples and the structure of the manifold in terms of its intrinsic dimension. When an atlas is given, we propose a two-step test to identify the difference between general distributions and characterize its type-II risk. When an atlas is not given, we propose a Hölder IPM test that applies to data distributions with Hölder densities and characterize its type-II risk. To mitigate the heavy computational burden of evaluating the Hölder IPM, we approximate the Hölder function class using neural networks. Based on the approximation theory of neural networks, we show that the neural network IPM test attains a type-II risk of the same order as the Hölder IPM test. Our proposed tests are adaptive to low-dimensional geometric structure because their performance crucially depends on the intrinsic dimension instead of the data dimension. Comment: 32 pages, 2 figures, 3 tables. Accepted by Information and Inference: A Journal of the IMA.
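As an illustration of an IPM-based two-sample statistic, the sketch below uses kernel MMD, a classical IPM taken over the unit ball of an RKHS. The paper's tests use Hölder and neural-network function classes instead, so this is only a minimal stand-in; the circle-in-R^10 data mimic the low-dimensional-manifold setting:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise squared distances, then the RBF kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD: an IPM over the RKHS unit ball."""
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(1)
# A 1-dimensional manifold (circle) embedded in R^10.
t = rng.uniform(0, 2 * np.pi, 200)
X = np.zeros((200, 10)); X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
s = rng.uniform(0, 2 * np.pi, 200)
Y_same = np.zeros((200, 10)); Y_same[:, 0], Y_same[:, 1] = np.cos(s), np.sin(s)
Y_diff = rng.normal(0, 1, (200, 10))      # a genuinely different distribution

stat_same = mmd2(X, Y_same)   # near zero: same distribution on the circle
stat_diff = mmd2(X, Y_diff)   # clearly positive: distributions differ
```

A test would compare the statistic against a permutation threshold; here the point is only that the IPM statistic separates the two cases while the data live on a 1-dimensional manifold inside a 10-dimensional ambient space.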
An investigation on the Factors Influencing the dissemination of WeChat Push Based on HSM and the Prediction of its Content Hotspot
With the continuous development of information technology, we-media platforms have emerged as a new carrier of information, and WeChat Subscription Accounts have quickly taken the lead among them. In the six years since their emergence, WeChat Subscription Accounts have attracted substantial traffic and generated large profit margins. Against this background, this study applies the heuristic-systematic model (HSM) of information processing to classify the heuristic and systematic factors that influence the dissemination of WeChat push content. We analyze the factors affecting the transmission of WeChat push, supplement the relevant theories, and offer suggestions for WeChat Subscription Account operators.
Could mergers become more sustainable? A study of the stock exchange mergers of NASDAQ and OMX
This study investigates whether the merger of NASDAQ and OMX could reduce portfolio diversification possibilities for stock market investors and whether national policies and international treaties are needed for the sustainable development of financial markets. The study is important because some players in the stock markets have not yet realized that stock exchanges have, over the last decades, moved from government-owned or mutually-owned organizations to private companies and that, after several mergers, the market is gradually tending to behave like a monopoly. From our analysis, we conclude that increased volatility and reduced diversification opportunities result from an increase in the long-run comovement between each pair of indices in the Nordic and Baltic stock markets (Denmark, Sweden, Finland, Estonia, Latvia, and Lithuania) and NASDAQ after the merger. We also find that the merger tends to improve the error-correction mechanism for NASDAQ, so that NASDAQ Granger-causes OMX, while OMX loses predictive power over NASDAQ after the merger. We conclude that the merger of NASDAQ and OMX reduces diversification possibilities for stock market investors, and our findings support the argument that national policies and international treaties are important for the sustainable development of financial markets.
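The Granger-causality relation described above can be sketched with a plain OLS F-test on simulated series. The lag order and the simulated data are illustrative assumptions, not the paper's specification (which would use the actual NASDAQ/OMX index series):

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic: do lags of x improve prediction of y beyond y's own lags?"""
    n = len(y)
    Y = y[lags:]
    # Restricted model: intercept + own lags of y.
    X_r = np.column_stack([np.ones(n - lags)]
                          + [y[lags - k:n - k] for k in range(1, lags + 1)])
    # Unrestricted model: additionally include lags of x.
    X_u = np.column_stack([X_r] + [x[lags - k:n - k] for k in range(1, lags + 1)])
    def ssr(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r
    ssr_r, ssr_u = ssr(X_r), ssr(X_u)
    df1, df2 = lags, n - lags - X_u.shape[1]
    return ((ssr_r - ssr_u) / df1) / (ssr_u / df2)

rng = np.random.default_rng(2)
T = 500
x = rng.normal(0, 1, T)            # stands in for the "causing" index
y = np.zeros(T)
for t in range(1, T):              # y depends on lagged x: x Granger-causes y
    y[t] = 0.5 * x[t - 1] + 0.3 * y[t - 1] + rng.normal(0, 0.5)

f_xy = granger_f(y, x)  # large F: lags of x help predict y
f_yx = granger_f(x, y)  # small F: lags of y do not help predict x
```

In practice one would compare the F-statistic to the critical value of an F(lags, df2) distribution; a one-directional result like `f_xy` large and `f_yx` small is exactly the asymmetry the study reports between NASDAQ and OMX after the merger.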
Surface Analysis of Graphene and Graphite
Graphene and graphite are two widely studied carbon materials. Owing to their particular properties and structure, graphene and graphite have been used in a variety of fields such as electronic devices and sensors. The surface properties of graphene and graphite, as well as of their derivatives, are strongly connected to the performance of these devices and sensors. It is therefore necessary to choose appropriate surface analysis techniques for characterization, which are useful not only for understanding surface composition and structure but also for the design and development of these types of materials. X-ray photoelectron spectroscopy (XPS) and time-of-flight secondary ion mass spectrometry (ToF-SIMS) are two of the key surface analysis techniques widely used to characterize these surfaces. In this chapter, an overview of the applications of XPS and ToF-SIMS in the study of graphene and graphite surfaces is presented. We hope that the information provided will stimulate more exciting and inspiring research on graphene and graphite and promote practical applications of these carbon materials in the future.
Timely Fusion of Surround Radar/Lidar for Object Detection in Autonomous Driving Systems
Fusing Radar and Lidar sensor data can fully exploit their complementary advantages and provide a more accurate reconstruction of the surroundings for autonomous driving systems. Surround Radar/Lidar can provide 360-degree view sampling at minimal cost, making them promising sensing hardware solutions for autonomous driving systems. However, due to intrinsic physical constraints, the rotating speed of surround Radar, and thus the frequency at which Radar data frames are generated, is much lower than that of surround Lidar. Existing Radar/Lidar fusion methods have to work at the low frequency of surround Radar, which cannot meet the high responsiveness requirements of autonomous driving systems. This paper develops techniques to fuse surround Radar/Lidar at a working frequency limited only by the faster surround Lidar rather than the slower surround Radar, based on the state-of-the-art object detection model MVDNet. The basic idea of our approach is simple: we let MVDNet work with temporally unaligned Radar/Lidar data, so that fusion can take place whenever a new Lidar data frame arrives, instead of waiting for the slow Radar data frame. However, directly applying MVDNet to temporally unaligned Radar/Lidar data greatly degrades its object detection accuracy. The key insight of this paper is that we can achieve a high output frequency with little accuracy loss by enhancing the training procedure to exploit the temporal redundancy in MVDNet so that it can tolerate the temporal unalignment of the input data. We explore several different ways of training enhancement and compare them quantitatively with experiments. Comment: Accepted at DATE 202
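The core scheduling idea, pairing each new Lidar frame with the most recent Radar frame already received instead of blocking at the Radar rate, can be sketched as follows. The frame rates are illustrative assumptions, not the sensors' actual specifications:

```python
# Toy timeline: Lidar at 10 Hz, Radar at 4 Hz (surround Radar spins slower).
lidar_times = [i * 0.1 for i in range(20)]   # 2 s of Lidar frames
radar_times = [i * 0.25 for i in range(8)]   # 2 s of Radar frames

def fuse_at_lidar_rate(lidar_ts, radar_ts):
    """Pair every Lidar frame with the latest Radar frame already received,
    so fusion runs at the (higher) Lidar frequency instead of waiting."""
    pairs, j = [], -1
    for t in lidar_ts:
        while j + 1 < len(radar_ts) and radar_ts[j + 1] <= t:
            j += 1                   # advance to the newest Radar frame <= t
        if j >= 0:                   # skip Lidar frames before any Radar frame
            pairs.append((t, radar_ts[j]))
    return pairs

pairs = fuse_at_lidar_rate(lidar_times, radar_times)
# The worst-case staleness of the Radar frame is the temporal unalignment
# that the enhanced training procedure must teach the model to tolerate.
max_lag = max(t_l - t_r for t_l, t_r in pairs)
```

With these rates the fused output runs at the full Lidar frequency, and the Radar input is at most one Radar period stale; that staleness is precisely the "temporal unalignment" the enhanced training is designed to absorb.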
DRL-GAN: dual-stream representation learning GAN for low-resolution image classification in UAV applications.
Identifying tiny objects in extremely low-resolution (LR) UAV-based remote sensing images is generally considered a very challenging task because of the very limited information in the object areas. In recent years there have been only a few attempts to approach this problem, which deal with LR image classification by enhancing either the poor image quality or the image representations. In this paper, we argue that the performance of LR image classification is limited by the inconsistency of the information loss and learning priority between Low-Frequency (LF) and High-Frequency (HF) components. To address this LF-HF inconsistency problem, we propose a Dual-Stream Representation Learning Generative Adversarial Network (DRL-GAN). The core idea is to produce super image representations optimal for LR recognition by simultaneously recovering the missing information in the LF and HF components, respectively, under the guidance of high-resolution (HR) images. We evaluate DRL-GAN on the challenging task of LR image classification. A comparison of experimental results on the LR benchmarks HRSC and CIFAR-10 and our newly collected “WIDER-SHIP” dataset demonstrates the effectiveness of DRL-GAN, which significantly improves classification performance, with up to a 10% gain on average.
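The LF/HF decomposition that motivates the dual-stream design can be illustrated with a simple FFT lowpass split. The circular cutoff and the FFT-based method are illustrative assumptions, not DRL-GAN's actual mechanism:

```python
import numpy as np

def split_lf_hf(img, cutoff=0.1):
    """Split an image into low- and high-frequency components with an FFT
    lowpass mask; by construction LF + HF reconstructs the original."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    cy, cx = h / 2, w / 2
    radius = cutoff * min(h, w)
    mask = ((yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2  # circular lowpass
    lf = np.fft.ifft2(np.fft.ifftshift(F * mask)).real   # smooth structure
    hf = np.fft.ifft2(np.fft.ifftshift(F * (~mask))).real  # edges and texture
    return lf, hf

rng = np.random.default_rng(3)
img = rng.random((32, 32))          # stand-in for an LR image patch
lf, hf = split_lf_hf(img)
```

Downsampling removes HF detail far more aggressively than LF structure, which is the "inconsistent information loss" between the two components that the dual-stream architecture treats with separate recovery branches.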